OpenAI co-founder who had key role in attempted firing of Sam Altman departs

The Guardian

OpenAI's co-founder and chief scientist, Ilya Sutskever, is leaving the startup at the center of today's artificial intelligence boom. "After almost a decade, I have made the decision to leave OpenAI," Sutskever said in a post on X. Sutskever played a key role in the dramatic firing and rehiring in November last year of OpenAI's CEO, Sam Altman. At the time, Sutskever was on the board of OpenAI and helped to orchestrate Altman's firing. Days later, he reversed course, signing on to an employee letter demanding Altman's return and expressing regret for his "participation in the board's actions". After Altman returned, Sutskever was removed from the board, and his position at the company became unclear.


OpenAI co-founder and Chief Scientist Ilya Sutskever is leaving the company

Engadget

Ilya Sutskever has announced on X, formerly known as Twitter, that he's leaving OpenAI almost a decade after he co-founded the company. In the same post, he said he's confident that OpenAI "will build [artificial general intelligence] that is both safe and beneficial" under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. In his own post about Sutskever's departure, Altman called him "one of the greatest minds of our generation" and credited him for his work with the company. Jakub Pachocki, OpenAI's previous Director of Research, who headed the development of GPT-4 and OpenAI Five, has taken over Sutskever's role as Chief Scientist.


OpenAI co-founder warns 'superintelligent' AI must be controlled to prevent possible human extinction

FOX News

A co-founder of artificial intelligence leader OpenAI is warning that superintelligence must be controlled in order to prevent the extinction of the human race. "Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," Ilya Sutskever and head of alignment Jan Leike wrote in a Tuesday blog post, adding that they believe such advancements could arrive as soon as this decade. They said managing such risks would require new institutions for governance and solving the problem of superintelligence alignment: ensuring AI systems much smarter than humans "follow human intent."


OpenAI co-founder on company's past approach to openly sharing research: 'We were wrong' - The Verge


The closed approach is a marked change for OpenAI, which was founded in 2015 by a small group including current CEO Sam Altman, Tesla CEO Elon Musk (who resigned from its board in 2018), and Sutskever. In an introductory blog post, Sutskever and others said the organization's aim was to "build value for everyone rather than shareholders" and that it would "freely collaborate" with others in the field to do so. OpenAI was founded as a nonprofit but later became a "capped profit" in order to secure billions in investment, primarily from Microsoft, with which it now has exclusive business licenses.